Method and system for displaying priority information in a vehicle
Patent abstract:
Method, and associated system, for displaying priority information in a vehicle (1), comprising the steps of i) capturing at least one first image (51) of a field of vision (22) of the user (2), ii) obtaining information about an environment of the vehicle (1), iii) determining a critical driving situation, iv) capturing at least one second image (52) of the environment of the vehicle (1), v) generating a virtual environment (6) of the user (2), where the virtual environment (6) generated is based on the critical driving situation determined, and vi) displaying the virtual environment (6) generated by means of the virtual reality device (3). This makes it possible to replace the mirrors with cameras (14, 36) and virtual reality devices (3), combined with real information, in order to increase the versatility of the information about the exterior (12) of the vehicle (1) presented to the user (2).
Publication number: ES2704350A1
Application number: ES201731120
Filing date: 2017-09-15
Publication date: 2019-03-15
Inventor: Alejandro Moreno Parejo
Applicant: SEAT SA
Main IPC class:
Patent description:
[0001] [0002] Method and system for displaying priority information in a vehicle [0003] [0004] OBJECT OF THE INVENTION [0005] [0006] The object of the present patent application is a method for displaying priority information in a vehicle, according to claim 1, which incorporates remarkable innovations and advantages, together with a system for displaying priority information in a vehicle, according to claim 14. [0007] [0008] BACKGROUND OF THE INVENTION [0009] [0010] Current vehicles have cameras that capture images with information about the environment, in particular about the exterior of the vehicle, such as its rear area. This information, relevant to the user of the vehicle, can be processed and presented through a virtual reality device. [0011] [0012] The known solution for improving the driver's visibility towards the rear of the vehicle is to install additional cameras and screens, which implies additional costs, apart from the fact that space has to be reserved for said screens and that they occupy fixed positions inside the vehicle. [0013] [0014] On the other hand, representing the images captured by cameras pointing towards the exterior of the vehicle, particularly towards the rear, on the HUD (Head Up Display), an information display projected onto the windshield, would imply a reduction in quality, given that the projection space is limited. It would also entail an additional system cost. [0015] [0016] In this regard, document US20160379411 discloses a method and system for providing information to a driver of a vehicle, which allows the driver to control the vehicle more safely. This information is presented to the driver as an augmented reality environment, so that the driver can visualize, through screens arranged inside the vehicle, images of the exterior of the vehicle that facilitate driving and monitoring of the external environment.
These screens can be arranged so as to replace an exterior side mirror or the central mirror of the vehicle. This offers the possibility of displaying augmented reality information that is relevant to the driver, so that safety is increased. Unlike in the present invention, however, the screens are fixed and located in the same place as current mirrors. [0017] [0018] Thus, in view of the foregoing, there is still a need for a method and system for displaying priority information in a vehicle, in particular information from the exterior of the vehicle, so that images and information are displayed clearly, without interfering with other images and information present in the user's field of vision, according to their importance and priority at each moment of driving, thus optimizing the representation of relevant information at the appropriate time. [0019] [0020] DESCRIPTION OF THE INVENTION [0021] [0022] The present invention consists of a method and system for displaying priority information in a vehicle by means of a virtual reality device, in which images with relevant information are presented to the user, framed within his own visual field. Preferably, the virtual reality device consists of virtual reality glasses or another equivalent system. [0023] [0024] Thus, in order to enjoy the advantages of the present invention, the user has to put on the virtual reality glasses, so that he visualizes the environment through the virtual reality images shown by said device. That is, the user does not visualize the environment directly, but rather observes everything through, in this case, the virtual reality glasses. [0025] [0026] The virtual reality device preferably comprises at least one camera that captures the user's point of view, the image from said at least one camera being shown to the user.
Before representing the content of the image captured by the camera, the system processes said image, as well as the images coming from one or more other cameras of the vehicle. The latter cameras preferably point towards the exterior of the vehicle and, in particular, are focused on its rear area. In this way, it is possible to detect a critical driving situation that may represent a collision risk, showing the driver of the vehicle a relevant image or piece of information that warns of said risk. [0027] [0028] Projecting the information in the virtual reality device itself allows a greater degree of freedom in the presentation of images compared with presenting it on at least one fixed screen of the vehicle. The information can thus be presented more noticeably and with more versatility, the user not being forced to look at a specific area of the vehicle, such as the rear-view mirror, and therefore not having to divert his gaze from driving. In addition, it also saves screen infrastructure costs.
[0029] [0030] In essence, the method of the present invention comprises the following steps: [0031] - capturing information, by means of cameras, from the rear, side and/or frontal area of the vehicle; [0032] - capturing information through at least one inner camera that captures the driver's field of vision; [0033] - processing the information recorded by the at least one inner camera, classifying the pixels of the image as belonging to the interior, the exterior and/or additional objects; [0034] - processing information from cameras and/or sensors external to the vehicle in order to detect critical driving situations; [0035] - detecting a collision risk; [0036] - generating a virtual image based on the information recorded by the at least one inner camera and by the external camera or cameras, depending on the detected collision risk; [0037] - displaying the virtual image on the screen of the virtual reality device. [0038] [0039] In particular, the invention consists of a method for displaying priority information in a vehicle, where a virtual reality device is used by a user inside the vehicle, the method comprising the steps of: [0040] i) capturing at least one first image of the user's environment, where the at least one first image captured coincides with the user's field of vision, ii) obtaining information about an environment of the vehicle, [0041] iii) determining a critical driving situation based on the vehicle environment information obtained, [0042] iv) capturing at least one second image of the environment of the vehicle, [0043] v) generating a virtual environment of the user, where the virtual environment comprises the at least one first image captured and the at least one second image captured, and where the virtual environment generated is based on the critical driving situation determined, and [0044] vi) displaying the virtual environment generated by means of the virtual reality device.
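Steps i) to vi) above can be sketched as a simple pipeline. The following is an illustrative sketch only; the function names, the "high"/"low" risk scale and the placeholder image strings are assumptions for clarity, not elements taken from the patent.

```python
# Illustrative end-to-end sketch of method steps i)-vi).

def assess_risk(env_info):
    """iii) Determine a critical driving situation from environment data."""
    if env_info.get("vehicle_in_blind_spot") and env_info.get("turn_signal_on"):
        return "high"        # imminent lane change into an occupied blind spot
    if env_info.get("vehicle_in_blind_spot"):
        return "low"         # vehicle present, but no manoeuvre started
    return None              # no critical situation determined

def generate_virtual_environment(first_image, second_images, risk):
    """v) Compose the user's field of vision with the exterior images."""
    env = {"base": first_image, "overlays": []}
    if risk is not None:     # only superimpose when a risk was determined
        env["overlays"] = [{"image": img, "priority": risk}
                           for img in second_images]
    return env

first_image = "inner_camera_frame"                     # i) field of vision
env_info = {"vehicle_in_blind_spot": True,
            "turn_signal_on": True}                    # ii) environment data
second_images = ["rear_camera_frame"]                  # iv) exterior images
risk = assess_risk(env_info)                           # iii)
env = generate_virtual_environment(first_image, second_images, risk)  # v)
# vi) `env` would then be rendered on the virtual reality device's screen.
```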
[0045] [0046] Thus the invention refers to a method, and an associated system, for avoiding collisions, such that, when a collision risk is detected, information to avoid the collision is projected onto the virtual reality device, preferably virtual reality glasses, which results in an increase in the safety of the occupants of the vehicle. [0047] [0048] Specifically, the information projected on the virtual reality device consists mainly of the images captured by cameras external to the vehicle, so that the driver can be shown an extension of the rear, side or frontal area of the vehicle at the moment a collision risk is detected. In this way, the images can be displayed only at the moment the danger appears, while giving the pertinent information so that the user can make the driving decisions and avoid the collision, or minimize the risk situation. [0049] [0050] A critical driving situation is understood as an environmental event, an action of the driver, or an action of a third vehicle or pedestrian with respect to the vehicle, among others, where the method determines that this information is a priority and must be shown to the driver of the vehicle. By way of example, it may be a second image of the side of the vehicle showing a blind spot occupied by a third vehicle, or a second image that enlarges a traffic signal that is not being respected, or a second image received through a wireless connectivity system, where the second image shows an accident or traffic jam occurring ahead of the vehicle. [0051] [0052] It should be noted that the collision risk situation is detected through any of the current driving assistance systems, such as the lane change assistant, the parking assistant, or the blind spot assistant; or also through communication systems such as vehicle-to-vehicle communication ("car 2 car") or vehicle-to-infrastructure communication ("car 2 X").
The above are non-limiting examples, it being possible to add other data input elements, devices or sensors. [0053] [0054] On the other hand, it should be mentioned that images from the exterior of the vehicle are projected onto the virtual reality device with different properties depending on the degree of emergency, priority or level of collision risk. Thus, first images are captured that coincide with the user's field of vision, that is, the direction in which the user is looking. By way of example, if the cameras and sensors determine that a vehicle in the blind spot involves a high risk of collision, the corresponding second image, or only the relevant part of it, will be displayed prominently on the virtual reality device. On the other hand, if the risk of collision is assessed as low, the image of said vehicle will be shown discreetly, or alternating with other information. [0055] [0056] To show the priority information efficiently and in a way that is practical for the user, a virtual environment is generated. A virtual environment is understood as the space that surrounds the user of a virtual reality system, generated from the first images captured from the real environment, these first images being previously processed. Additionally, second images are superimposed or added, according to an assessment of the critical driving situation. This superposition is made taking into account a series of parameters of the first and second images, depending on the dangerousness of the critical driving situation determined. These properties may consist of varying the position of the second image on the screen of the virtual reality device, the intensity of the second image, its degree of transparency, its apparent proximity or distance, its size, etc. Thus, by way of example, the greater the risk, the greater the size of the second image, the more centered its position, and the more intense and superimposed it is with respect to the at least one first image of the user's environment.
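The mapping from risk level to display properties described above can be made concrete with a small sketch. The numeric ranges below are illustrative assumptions; the patent only states the qualitative relationship (greater risk, greater size, more intensity, less transparency).

```python
# Hedged sketch: map a risk level in [0.0, 1.0] to the overlay's display
# properties. The concrete coefficients are assumptions, not from the patent.

def overlay_properties(risk):
    return {
        "scale":        0.4 + 0.6 * risk,   # greater risk, greater size
        "intensity":    0.5 + 0.5 * risk,   # more intense when critical
        "transparency": 1.0 - risk,         # critical -> less transparent
    }

low = overlay_properties(0.2)    # discreet: small and translucent
high = overlay_properties(0.9)   # prominent: large, intense, nearly opaque
```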
[0057] [0058] Optionally, the user can select the position of the second images corresponding to the external camera within the part of the field of vision that is most convenient, keeping within safety margins. The user can also select the size of the image, likewise staying within a safety margin. Both the size and the position of the image can be changed automatically after some action or driving event. To quote an example, the size of the second image may be increased when the user activates the turn signal, it being understood that the driver intends to change lanes and needs to observe the possible presence of vehicles behind. [0059] [0060] As additional advantages, it should be pointed out that the virtual reality device, preferably virtual reality glasses, represents an economic saving compared to the option of incorporating additional screens in the vehicle's interior. Moreover, being a virtual representation, it does not require a reservation of physical space inside the vehicle. And, as mentioned, it offers the flexibility that the user can modify the position and size of the image depending on driving factors or user preferences. [0061] [0062] According to another aspect of the method of the invention, the step of generating a virtual environment comprises superimposing the at least one second image on the at least one first image. By superimposing a second image on a first image it is understood that at least one fragment of the second image is superimposed on at least one fragment of the first image. In a variant, it is possible to contemplate the option of replacing a fragment of the first image with at least one fragment of the second image, instead of superimposing it.
The advantage of said superposition or substitution between images is an increase in the versatility of the information presented to the user of the vehicle, since it is possible to modify and combine pieces of information; that is, there is no need to choose between presenting the second image at the cost of completely hiding the first image. [0063] [0064] More particularly, the step of generating a virtual environment comprises altering a brightness, a color, a contrast and/or a saturation of the at least one second image. In this way, it is possible to present the images in different ways in driving situations of different hazard and risk. For example, the present invention makes it possible to increase or decrease the relevance with which the second image is displayed in the generated virtual environment, so that if the degree of priority of the information is determined as critical, the level of transparency of the second image is decreased. On the other hand, if the degree of priority of the information is determined as non-critical, the transparency level of the second image is increased. [0065] [0066] Advantageously, the step of generating a virtual environment comprises altering a brightness, a color, a contrast and/or a saturation of the at least one first image, in order to emphasize and highlight the importance of the second image. Optionally, it is also possible to partially modify the first image in the area on which the second image is superimposed, so that the second image is highlighted and visualized more clearly, without any type of distortion or interference.
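The superposition with a variable degree of transparency described above amounts to simple alpha compositing of a second-image fragment over the first image. A minimal per-pixel sketch, assuming 0-255 grayscale pixel values (the pixel format is an assumption):

```python
# Sketch of superimposing a second-image pixel onto a first-image pixel by
# alpha blending. transparency = 1.0 leaves the first image unchanged;
# transparency = 0.0 fully replaces it (the substitution variant).

def blend(first_px, second_px, transparency):
    alpha = 1.0 - transparency           # opacity of the second image
    return round(alpha * second_px + (1.0 - alpha) * first_px)

blend(100, 200, 0.1)   # critical priority: low transparency -> 190
blend(100, 200, 0.9)   # non-critical: high transparency -> 110
blend(100, 200, 0.0)   # substitution: second image fully replaces -> 200
```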
[0067] [0068] According to another aspect of the invention, the step of generating a virtual environment comprises defining a dimension of the at least one second image, so that it is possible to set the size with which said second image will be represented in the virtual reality device based on the degree of priority of the critical driving situation determined. [0069] [0070] In a preferred embodiment of the invention, the step of generating a virtual environment comprises incorporating at least one piece of additional virtual information, wherein the additional virtual information is incorporated in the at least one first image and/or in the at least one second image, so that the user of the vehicle can be provided with additional information in the manner of augmented reality, it being possible, for example, to modify the color of the contour of a third vehicle with which a critical driving situation exists, or to add to the first or second image speed information of a third vehicle, among others. [0071] [0072] Specifically, the method of the invention comprises a step of determining the user's field of vision, where the step of generating a virtual environment comprises establishing a position of the at least one second image in the virtual environment based on the determined user's field of vision. In this way, the position of the additional information can be adjusted to the type of information in question and to the driving needs, based on the degree of criticality of the determined situation. Thus, if the degree of priority is low, the second image is superimposed on an area of the first image of little relevance, such as a corner or side of the user's field of vision. On the contrary, if the degree of priority is high, the second image is superimposed on a central area of the user's field of vision.
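The priority-dependent placement rule above (low priority in a corner, high priority in the center) can be sketched as a tiny lookup. The normalized screen coordinates, with (0.5, 0.5) as the center of the field of vision, are an assumption for illustration:

```python
# Sketch of positioning the second image by degree of priority.
# Coordinates are normalized screen positions: (0.5, 0.5) = center.

def overlay_position(priority):
    if priority == "high":
        return (0.5, 0.5)    # central area of the user's field of vision
    return (0.9, 0.1)        # low priority: discreet corner placement

overlay_position("low")      # corner of the field of vision
overlay_position("high")     # central area
```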
[0073] [0074] On the other hand, establishing the position of the at least one second image comprises displacing the position of the second image in the virtual environment according to a variation of the determined user's field of vision, so that the second image remains superimposed on the first image regardless of whether the field of vision has changed. Thus, the second image remains visible and perceptible at all times. In the event that the user rotates his head and the field of vision varies, the second image changes its position in the virtual environment according to the variation of the user's field of vision, so that the second image always appears visible. [0075] [0076] According to another embodiment of the invention, establishing the position of the at least one second image comprises keeping the position of the at least one second image fixed in the virtual environment. In this way, a variation of the determined user's field of vision does not affect the position of the second image in the virtual environment. Consequently, said second image may leave the virtual environment shown to the user, being visible only when the user's field of vision points to the area where the second image is fixed. [0077] [0078] It should be specified that, in this case, the position of the second image is fixed in space; that is, if the image is placed at the driver's window, it will always appear at that point, even if the user moves his head. If this movement is, in particular, to the right in order to look at the passenger, and exceeds a certain angle of rotation, the image will disappear from the field of vision.
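The two placement strategies just described correspond to what headset rendering commonly calls view-locked and world-locked content. A minimal sketch in terms of head yaw angles; the 45-degree half-width of the field of vision is an assumption:

```python
# Sketch of the two positioning embodiments. Angles in degrees; positive
# values to the right. FOV_HALF is an assumed half-width of the field of vision.

FOV_HALF = 45.0

def view_locked(anchor_yaw, head_yaw):
    """First embodiment: the second image follows the head, so its position
    on the display is independent of where the user looks."""
    return anchor_yaw

def world_locked(anchor_yaw, head_yaw):
    """Second embodiment: the second image is fixed in the cabin (e.g. at the
    driver's window). Returns its on-screen offset, or None once the head
    rotation takes it outside the field of vision."""
    offset = anchor_yaw - head_yaw
    return offset if abs(offset) <= FOV_HALF else None

# Image anchored at -30 deg (towards the driver's window); user turns 40 deg
# to the right to look at the passenger:
world_locked(-30, 40)   # offset -70 deg: outside the field of vision -> None
view_locked(-30, 40)    # still shown at -30 deg on the display
```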
[0079] [0080] According to another aspect of the invention, the at least one first image comprises at least one first segment, wherein the method comprises a step of classifying the at least one first segment of the at least one first image either as belonging to the interior of the vehicle, or as belonging to the exterior of the vehicle, or as corresponding to an additional object of the vehicle. [0081] [0082] By "first segment" is meant a portion or part of an image. The subdivision of an image into portions or first segments can be done by image processing, dividing the image by colors, volumes, geometries, contours, etc., thereby obtaining the plurality of first segments that make up the image. Alternatively, the subdivision of the image can be by pixels or another equivalent. Thus, the term segment of an image is equivalent to a pixel of said image, either being able to replace the other, and both being interpreted under the same meaning. [0083] [0084] Thus, the system that executes the method of the present invention divides the images into three parts, in order to be able to process the correct part, classifying the pixels as corresponding to the interior, that is, to the interior of the vehicle; as corresponding to the exterior of the vehicle, in particular the area corresponding to the glazing of the windows; or as corresponding to additional objects not belonging to the passenger compartment, such as packages, passengers or body parts of the user of the virtual reality device himself. [0085] [0086] Additionally, a virtual environment is understood as the space that surrounds the user of a virtual reality system, generated from images captured from the real environment, these images possibly being previously processed.
In this way, just as an image captured from the environment comprises at least one first segment, the virtual environment comprises at least one virtual segment, the virtual segment being understood as a subdivision, portion or part of the virtual environment. Again, the term segment of the virtual environment is equivalent to a pixel thereof, either being able to replace the other, and both being interpreted under the same meaning. [0087] [0088] According to another aspect of the method of the invention, the step of classifying the at least one first segment is based on a location of at least one element of the user's environment with respect to the virtual reality device, wherein the method additionally comprises the steps of: [0089] a) obtaining at least one real vector, wherein the at least one real vector comprises a modulus and a direction between the virtual reality device and the at least one element of the user's environment, [0090] b) determining a position of the virtual reality device in the vehicle, [0091] c) assigning the at least one real vector to the at least one first segment, and [0092] d) comparing the at least one real vector with at least one theoretical vector, where the at least one theoretical vector is previously known. Subsequently, the at least one first segment of the at least one image is classified and labeled as interior of the vehicle, exterior of the vehicle, or additional object of the vehicle, based on the comparison of the at least one real vector with the at least one theoretical vector. [0093] [0094] Additionally, the at least one theoretical vector is based on an arrangement of the vehicle interior and on the position of the virtual reality device in the vehicle, where the arrangement of the vehicle interior is previously known.
It should be pointed out that the layout of the vehicle's interior is like a 3D (three-dimensional) map of the interior, so that, knowing the position of the virtual reality device in said 3D map, all the theoretical vectors between the virtual reality device and the at least one element of the environment are known. [0095] [0096] On the other hand, the method of the invention comprises a further step of determining at least one window area in the at least one captured image, wherein the step of determining the at least one window area comprises recognizing at least one predefined geometric shape by means of image processing, and/or determining at least one marker of the at least one image, wherein the marker comprises a predefined color, and/or comparing the at least one real vector with the at least one theoretical vector. In a preferred embodiment there may be means for facilitating the location of the windows or the window area, such as window frames painted in a predetermined color and detected by a step of processing the at least one captured image of the user's environment. [0097] [0098] As anticipated, the window area corresponds to the windshield, as a glazed or transparent surface, and, additionally, to the side and rear windows of the passenger compartment. Note that when, in the comparison of the real vector with the theoretical vector, both the angles and the moduli coincide, the first segment is thereby defined as corresponding to the window area. [0099] [0100] Based on the foregoing, it should be specified that the at least one first segment is classified as exterior of the vehicle if the at least one first segment is arranged in the at least one window area of the at least one image. [0101] [0102] Additionally, the at least one first segment is classified as interior of the vehicle if the real vector of the at least one first segment is substantially the same as the theoretical vector, where both the angles and the moduli of the vectors coincide.
It is noted that in this case the theoretical vector is not defined as a window area. [0103] [0104] On the other hand, the at least one first segment is classified as an additional object of the vehicle if the modulus of the real vector of the at least one first segment is smaller than the modulus of the theoretical vector. [0105] [0106] In a particular embodiment of the invention, the step of determining a position of the virtual reality device in the vehicle comprises determining a location of the user's head by means of image processing, and/or determining a location of at least one reference point of the virtual reality device by means of image processing, and/or determining a location of the virtual reality device by means of triangulation, and/or determining a location of the virtual reality device by means of at least one inertial system arranged in the virtual reality device. [0107] [0108] According to another particular aspect of the invention, the step of generating a virtual environment of the user is additionally based on the classification of the at least one first segment of the at least one first image. Properties such as brightness, position and dimension also depend on said classification of the first segment. [0109] [0110] In a specific embodiment of the invention, the at least one second image comprises at least one second segment, wherein the step of superimposing the at least one second image on the at least one first image comprises superimposing the at least one second segment of the second image on the first segment of the interior of the vehicle and/or on the first segment of the exterior of the vehicle and/or on the first segment of the additional object of the vehicle. In this way, the superposition of the second image is adjusted in its properties according to the part of the visual field to which the first segment corresponds.
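The three classification rules above (window area yields exterior, matching moduli yield interior, a shorter real vector yields an additional object) can be sketched as a single decision function. The measurement tolerance is an assumption; the patent only requires the vectors to be "substantially the same":

```python
# Sketch of classifying a first segment by comparing the modulus of its real
# vector (e.g. measured by a Lidar paired with the first camera) against the
# theoretical modulus from the previously known 3D map of the interior.

TOL = 0.05  # metres; assumed measurement tolerance

def classify_segment(real_modulus, theoretical_modulus, in_window_area):
    if in_window_area:
        return "exterior"            # segment lies in a window area
    if abs(real_modulus - theoretical_modulus) <= TOL:
        return "interior"            # real vector matches the 3D map
    if real_modulus < theoretical_modulus - TOL:
        return "additional_object"   # something closer than the cabin surface
    return "unknown"                 # outside the cases the patent describes

classify_segment(1.20, 1.20, False)   # interior of the vehicle
classify_segment(0.60, 1.20, False)   # additional object (e.g. a package)
classify_segment(1.50, 1.20, True)    # exterior, seen through the window
```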
In this way, in the event that a critical driving situation is determined as low priority, the at least one second image may be superimposed only on the segments of the first image classified as interior of the vehicle. On the other hand, in the event that a critical driving situation is determined as high priority, the at least one second image may be superimposed on the segments of the first image classified as additional objects of the vehicle or even on the segments of the first image classified as exterior of the vehicle. [0111] [0112] According to another aspect of the invention, the step of moving the position of the second image in the virtual environment comprises superimposing the at least one second segment of the second image on successive first segments of the interior of the vehicle and/or successive first segments of the exterior of the vehicle and/or successive first segments of the additional object of the vehicle. In this way, the displacement of the second image is adjusted in its position according to the part of the visual field corresponding to said first segments, without covering segments of higher priority than the second image itself, such as the exterior of the vehicle.
[0113] Another object of the present invention is a system for displaying virtual reality information in a vehicle, where the system comprises: [0114] - a virtual reality device, where the virtual reality device comprises at least one screen, [0115] - at least one first camera, wherein the at least one first camera is configured to capture at least one first image, where the at least one first image coincides with a field of vision of the user, [0116] - at least one sensing means, wherein the at least one sensing means is configured to obtain information about an environment of the vehicle, [0117] - at least one second camera, wherein the at least one second camera is configured to capture at least one second image of an environment of the vehicle, and - at least one processing unit, where the at least one processing unit is configured to determine a critical driving situation based on the information about the environment of the vehicle, and to generate a virtual environment of the user, where the virtual environment comprises the at least one first image captured and the at least one second image captured, and where the virtual environment generated is based on the critical driving situation determined. [0118] [0119] Said system provides the same advantages as the above-mentioned method, inasmuch as the processing unit executes its stages in the vicinity of the vehicle's interior. Thus, the virtual reality device is capable of being worn on the head of a user of the vehicle, thereby enabling the virtual reality device to capture the field of vision of the user through the at least one first camera.
[0120] [0121] Preferably, and as has been pointed out, the virtual reality device usually consists of virtual reality glasses which, in a preferred embodiment, comprise a high-resolution camera, high-quality autofocus systems, at least one night and/or infrared camera and/or a camera with a wide-angle lens, and a system capable of determining the distance to all the objects in the image, such as a Lidar (Light Detection and Ranging, or Laser Imaging Detection and Ranging), that is, a device that makes it possible to determine the distance from a laser emitter to an object or surface using a pulsed laser beam. Thus, a Lidar-type sensor can be arranged adjacent to the at least one first camera, so that the at least one segment of the image is paired with a real vector. [0122] In this way, the presence of another vehicle can be detected in the area of the field of vision corresponding to the blind spot. When the user drifts out of the lane or activates the turn signal, the visual information corresponding to the at least one second camera can be made visible in the virtual environment of the user, regardless of where the driver is looking. The second image captured by the second camera is shown by means of the virtual reality device, independently of the user's field of vision, with a larger or smaller size, with or without a degree of transparency, and superimposed or not, depending on the critical driving situation determined and/or the classification of the at least one first segment of the first image. [0123] [0124] According to another aspect of the invention, the system comprises at least one position detection means, wherein the position detection means is configured to determine a position of the virtual reality device in the vehicle.
In this way, the position and orientation of the virtual reality device inside the vehicle can be known, which, among other things, makes it possible to classify the first image into layers, sections or segments, and also to determine the field of vision that the user is perceiving at every moment. [0125] [0126] Note, therefore, that the arrangement of the interior of the vehicle is previously known in a preferred embodiment. This means knowing the plan or map of the interior of the vehicle and, therefore, the theoretical distances to each object in the environment according to the position occupied by the virtual reality device inside the vehicle. [0127] [0128] As for the positioning and orientation system with which the virtual reality device of the present invention is provided, one or more of the following systems can be used: [0129] - Cameras that record the position and orientation of the device or virtual reality glasses, either based on processing of image shapes, for example the shape of the head, or by placing reference points on the device or virtual reality glasses, for example a camera that only records the color violet, the device or glasses having violet reference points. [0130] - Positioning by triangulation, using technologies based on electromagnetic or infrared waves. To determine the orientation, it is only necessary to place three transmitters and/or receivers on the device or virtual reality glasses, so that the orientation in space can be determined. The space or interior of the environment must also have at least three transmitters and/or receivers that perform the function of base stations. [0131] - Reference images. The wearer or object can have reference points which, when captured by the camera of the device or virtual reality glasses, make it possible to determine its position and orientation. [0132] In order to assist the orientation of the device or virtual reality glasses, it can have a system of accelerometers and gyroscopes.
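The triangulation option with three base stations can be illustrated with a standard 2D trilateration sketch: given the distances from the headset to three fixed stations, the position follows from subtracting the circle equations pairwise, which yields a linear system. The station coordinates below are illustrative assumptions:

```python
# Sketch of positioning by triangulation (trilateration in 2D): recover the
# headset position from its distances to three fixed base stations.

import math

def trilaterate(p1, p2, p3, r1, r2, r3):
    """Solve (x - xi)^2 + (y - yi)^2 = ri^2 for i = 1..3 by subtracting the
    first circle equation from the other two, which is linear in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Assumed base stations at three corners of the cabin (metres):
stations = ((0.0, 0.0), (2.0, 0.0), (0.0, 1.5))
true_pos = (0.8, 0.6)
dists = [math.dist(s, true_pos) for s in stations]   # measured distances
pos = trilaterate(*stations, *dists)                 # recovers (0.8, 0.6)
```

With three (non-collinear) stations, the pairwise-subtracted equations determine the position uniquely; the same idea extends to 3D with a fourth station.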
[0133] [0134] The attached drawings show, by way of non-limiting example, a method and system for displaying priority information in a vehicle, constituted according to the invention. Other characteristics and advantages of said method and system for displaying priority information in a vehicle, object of the present invention, will be evident from the description of a preferred, but not exclusive, embodiment, illustrated by way of non-limiting example in the accompanying drawings, in which: [0135] [0136] BRIEF DESCRIPTION OF THE DRAWINGS [0137] [0138] Figure 1 is a perspective view of the interior of a vehicle, according to the present invention. [0139] Figure 2 is a first-person view of the field of vision of a user from the driver's position in the interior of a vehicle, according to the present invention. [0140] Figure 3 is a perspective view of a virtual reality device, according to the present invention. [0141] Figure 4A is a perspective view of a virtual reality device in a first position, according to the present invention. [0142] Figure 4B is a perspective view of a virtual reality device in a second position, according to the present invention. [0143] Figure 5A is a perspective view of the first row of seats of the passenger compartment of a vehicle with two users wearing their respective virtual reality devices, according to the present invention. [0144] Figure 5B is a perspective view of the field of vision of the driver in the interior of a vehicle, according to the present invention. [0145] Figure 6A is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, with a second image in a first position of the field of vision, approximating that which a rear-view mirror of the vehicle would have, according to the present invention.
[0146] Figure 6B is a perspective view of the virtual environment that the driver observes in the passenger compartment of a vehicle through the virtual reality device, after a first turn of the driver's head, with a second image in a second position of the field of vision, displaced from the first position, according to the present invention. [0147] Figure 6C is a perspective view of the virtual environment observed by the driver in the interior of a vehicle through the virtual reality device, after a second turn of the driver's head, with a second image in a third position of the field of vision and with at least one altered visual property, according to the present invention. Figure 6D is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, after a third turn of the driver's head, with a second image in a fourth position of the field of vision and with at least one altered visual property, according to the present invention. [0148] Figure 7A is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, with a second image in a fixed first position, approximating that which a rear-view mirror of the vehicle would have, according to the present invention. [0149] Figure 7B is a perspective view of the virtual environment observed by the driver in the interior of a vehicle through the virtual reality device, after a first turn of the driver's head, with a second image in the first fixed position, according to the present invention.
[0150] Figure 7C is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, after a second turn of the driver's head, with a second image in the first fixed position, partially leaving the user's field of vision, according to the present invention. [0151] Figure 7D is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, after a third turn of the driver's head, with a second image in the first fixed position, having completely left the user's field of vision, according to the present invention. [0152] Figure 8A is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, with a second image in a first position of the field of vision, approximating that which a rear-view mirror of the vehicle would have, according to the present invention. [0153] Figure 8B is a perspective view of the virtual environment observed by the driver in the interior of a vehicle through the virtual reality device, after a first turn of the driver's head, with a second image in a first fixed position, according to the present invention. [0154] Figure 8C is a perspective view of the virtual environment observed by the driver in the interior of a vehicle through the virtual reality device, after a second turn of the driver's head, with a second image in a third position of the driver's field of vision, and with at least one altered visual property, according to the present invention.
[0155] Figure 8D is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, after a third turn of the driver's head, with a second image in a fourth position and with at least one altered visual property, according to the present invention. [0156] Figure 9A is a perspective view of the virtual environment observed by the driver in the interior of a vehicle through the virtual reality device, with a second image and with objects additional to the vehicle superimposed on the second image, according to the present invention. [0157] Figure 9B is a perspective view of the virtual environment observed by the driver in the passenger compartment of a vehicle through the virtual reality device, with a second image and with the second image superimposed on the objects additional to the vehicle, according to the present invention. [0158] [0159] DESCRIPTION OF A PREFERRED EMBODIMENT [0160] [0161] In view of the mentioned figures and according to the numbering adopted, an example of a preferred embodiment of the invention can be observed, comprising the parts and elements indicated and described in detail below. [0162] [0163] As can be seen in Figure 5A, the system and method of the present invention are based on projecting virtual reality information by means of a virtual reality device 3. The virtual reality device 3 is preferably arranged on the head 21 of the user 2, whether a driver or a passenger, when inside a vehicle 1.
[0164] By way of summary, in a preferred embodiment the method of the present invention performs the following actions: [0165] - capturing at least one first image 51 by means of at least one first camera 31, where the at least one image 5 coincides with the field of vision 22 of the user, - obtaining information on the vehicle environment, so that both vehicle parameters and information external to the vehicle are analysed, so that a critical driving situation can be determined, [0166] - determining a critical driving situation, so that priority information must be shown in the virtual reality device 3 in preference to the at least one captured image 5, [0167] - capturing at least one second image 52, where the at least one second image 52 shows the critical driving situation, [0168] - generating a virtual environment 6 of the user 2, where the virtual environment 6 comprises the first image 51, which coincides with the field of vision 22, and the second image 52, so that the critical driving situation is shown in the user's field of vision, [0169] - altering at least one visual property of the first image 51 and/or of the second image 52, so that the priority information displayed to the user 2 can be highlighted and positioned according to the field of vision 22 of the user 2 and the determined priority of the critical driving situation, and [0170] - displaying the virtual environment 6 generated by means of the virtual reality device 3. [0171] [0172] Note that in a first variant the virtual environment 6 would be displayed on a screen 37 of the virtual reality device or glasses 3, and in a second variant it would be projected on the lenses of the virtual reality device or glasses 3 themselves. [0173] [0174] Figure 1 shows, illustratively, the interior of a vehicle 1, with a processing unit 4 and a memory unit 41, preferably located under the dashboard. A plurality of position detection means 7 for the virtual reality device 3 are also observed in the vehicle 1.
An example of positioning the virtual reality device 3 is by means of transceivers, for example of infrared or electromagnetic waves. Thus, by means of a triangulation process, and knowing the emission and response times of said waves with devices located at known locations in the vehicle, its position can be determined precisely. [0175] Figure 2 shows, illustratively, a first-person view of the field of vision 22 of a user 2 from the driver's position in the interior of a vehicle 1. In it the positions of the processing unit 4 and the memory unit 41 can be seen, as well as the areas of the field of vision 22, classified as interior 11, exterior 12 and window area 15. Also shown are an example location of a second camera 14, which obtains second images 52 of the environment of the vehicle 1, and of a third camera 36, in the space usually intended for a rear-view mirror. [0176] [0177] Figure 3 shows, illustratively, a perspective view of a virtual reality device 3. Said virtual reality device 3 is preferably virtual reality glasses. The virtual reality glasses preferably comprise a first camera 31 for capturing the at least one first image 51; at least one distance sensor 9, where the at least one distance sensor 9 is configured to obtain at least one distance between the user 2 and the objects of the environment; an accelerometer 34 and a gyroscope 35 in order to determine a position of the virtual reality device 3 in the vehicle 1; as well as a processing unit 4. Thus, the virtual reality device 3 knows where it is positioned and knows the distance to each point of the interior of the vehicle 1. In addition, a screen 37 allows the generated virtual environment 6 to be shown to the user 2. [0178] [0179] Figure 4A shows, illustratively, a virtual reality device 3 in a first position, which corresponds to a top view of the virtual reality glasses.
Figure 4B shows, illustratively, a virtual reality device 3 in a second position, corresponding to a side view of the virtual reality glasses. In order to position the virtual reality device 3 in the environment, marks or beacons can be placed on the glasses to serve as reference points. In Figure 4A the marks are arranged in the upper area of the frame. In Figure 4B the marks are arranged in the lateral zone of the temples. By means of cameras arranged inside the vehicle, the position and orientation of said marks are determined, thus positioning the virtual reality device 3. [0180] [0181] According to another aspect of the invention, the method of the present invention classifies at least one first segment 511 of the first images 51 obtained by the first camera 31. The at least one first segment 511 is classified as: [0182] - interior 11 of the vehicle 1, [0183] - exterior 12 of the vehicle 1, or [0184] - additional object 13 of the vehicle. [0185] [0186] Thus, the classification of the first segments 511 is carried out based on a location of at least one element of the environment of the user 2 with respect to the virtual reality device 3, where the method additionally comprises the steps of: [0187] a) obtaining at least one real vector 23, wherein the at least one real vector 23 comprises a modulus and a direction between the virtual reality device 3 and the at least one element of the environment of the user 2, [0188] b) determining a position of the virtual reality device 3 in the vehicle 1, [0189] c) assigning the at least one real vector 23 to the at least one segment, and [0190] d) comparing the at least one real vector 23 with at least one theoretical vector 24, where the at least one theoretical vector 24 is previously known.
[0191] [0192] In Figure 5A it is possible to observe, illustratively, a perspective view of the first row of seats of the passenger compartment of a vehicle with two users 2 wearing their respective virtual reality devices 3, in accordance with the present invention. It is observed schematically how the virtual reality device captures first images 51 of the environment, coinciding with the field of vision 22 of the user 2. In addition, at least one distance sensor 9 captures at least one modulus and one direction between the virtual reality device 3 and the elements in the environment of the user 2, so that a plurality of real vectors 23 are defined. Each first segment 511 of the image 5 has at least one real vector 23 associated with it, so that a relative position of the at least one first segment 511 with respect to the user 2 is known. [0193] [0194] Additionally, at least one position detection means 7 makes it possible to know the position of the virtual reality device 3 inside the vehicle 1. Knowing this position is essential to locate the user 2 in a previously known three-dimensional map of the vehicle 1. In this way, a plurality of theoretical vectors 24 will be known, indicating the relative position between the objects located in the environment of the user 2 and the user 2. A comparison between the plurality of theoretical vectors 24 and the plurality of real vectors 23 makes it possible to classify the plurality of first segments 511, or additional objects, of the image 5, thereby generating a virtual environment 6 adapted to the specific needs of the user 2. [0195] In order to determine the first segments 511 of the first image 51 that represent the exterior 12 of the vehicle 1, at least one window area 15 is determined in the at least one image 5 captured.
This is based on recognising at least one predefined geometric shape by means of image processing, and/or determining at least one marker in the at least one first image 51, wherein the marker comprises a predefined colour, and/or comparing the at least one real vector 23 with the at least one theoretical vector 24. Note that the window area 15 corresponds to the windscreen, or to any glazed or transparent surface of the vehicle 1. Thus, at least one first segment 511 is classified as exterior 12 of the vehicle 1 if the at least one first segment 511 is arranged in the at least one window area 15 in the at least one first image 51. [0196] [0197] Figure 5B shows a first virtual environment 6 generated for the user 2. Note that this first virtual environment 6 does not present any modification with respect to the real environment of the user 2. Thus, a field of vision 22 of a driver user 2 can be observed in the interior of a vehicle 1, where the different zones of the field of vision 22 can be seen: pixels or first segments 511 corresponding to the interior 11 of the vehicle 1, or passenger compartment, for example the dashboard of the vehicle 1; pixels corresponding to additional objects 13 not belonging to the interior 11 of the vehicle 1, or passenger compartment, in this case the hands of the driver user 2; and pixels corresponding to the exterior 12 of the vehicle 1, that is, to the part of the first image 51 that lies beyond the windscreen. [0198] [0199] Specify that the present invention classifies the pixels of the first image 51 into at least the following layers: exterior 12, interior 11 and additional object 13 of the vehicle 1. The exterior 12 corresponds to the captured pixels that are positioned in the window area 15, or glazing.
Therefore, everything that is captured and that, according to the 3D arrangement, corresponds to glazing or the window area 15 is classified as exterior 12, as long as the real distance of that pixel is consistent with that arrangement. As for the interior 11 of the vehicle 1, or passenger compartment, the real distance must coincide with the theoretical distance according to the 3D arrangement. As for an additional object 13, foreign to the interior 11 of the vehicle 1, or passenger compartment, the real distance must be smaller than the theoretical distance according to the 3D arrangement. [0200] [0201] Starting from an environment like the one shown in Figure 2, where this environment of the user 2 is captured by means of the first camera 31, a critical driving situation can be determined. A critical driving situation can be an invasion of a lane by the vehicle 1, an imminent situation of collision or contact between the vehicle 1 and a third vehicle, a third vehicle located in the blind spot of the vehicle 1 and therefore not seen by the driver, a pedestrian close to the path of the vehicle 1, etc. [0202] [0203] Any of these critical driving situations can be detected by means of vehicle sensors, such as distance sensors, presence sensors or sensors that parameterise the trajectory of the vehicle 1. When a critical driving situation is detected, a virtual environment is generated that comprises at least one second image 52 of the exterior of the vehicle superimposed on the at least one first image 51. [0204] [0205] As the at least one first image 51 comprises at least one first segment 511, that is, at least one area, part or pixel of the first image 51, the at least one second image 52 comprises at least one second segment 521. Thus, superimposing the at least one second image 52 on the at least one first image 51 comprises superimposing the at least one second segment 521 of the second image 52 on the at least one first segment 511 of the first image.
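The layer rules above can be condensed into a per-pixel classifier: a pixel measurably closer than the cabin map predicts is an additional object; otherwise it is exterior if it falls in the window area, and interior if its distance matches the map. This is a minimal sketch of those rules; the distance tolerance `tol` and the string labels are assumptions for illustration.

```python
def classify_pixel(real_dist, theoretical_dist, in_window_area, tol=0.05):
    """Classify a first-image pixel/segment by comparing the measured (real)
    distance with the theoretical distance from the known 3D cabin map."""
    if real_dist < theoretical_dist - tol:
        return "additional_object"  # something between the user and the cabin
    if in_window_area:
        return "exterior"           # seen through the glazing (window area 15)
    return "interior"               # matches the cabin map (interior 11)
```

For example, the driver's hand in front of the dashboard returns a distance well below the map's value for that ray and is labelled an additional object, while a dashboard pixel at its expected distance is labelled interior.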
It may also mean replacing the at least one first segment 511 with the at least one second segment 521. [0206] [0207] Figure 6A shows, illustratively, a perspective view of the virtual environment 6 observed by the driver in the passenger compartment of a vehicle 1 through the virtual reality device 3, with a second image 52 in a first position of the field of vision 22, approximating that which a rear-view mirror of the vehicle would have. The various areas of the field of vision 22 of the user 2 can also be seen, namely the interior 11 and the exterior 12. A first image 51 is observed, with at least one first segment 511, and a second image 52, with at least one second segment 521. The virtual environment 6 comprises at least one virtual segment 61. [0208] [0209] Because the vehicle 1 has invaded a left lane in which a third vehicle is circulating, a critical driving situation has been determined. Therefore, the second image 52 is represented in the virtual environment 6 so that the user 2 receives priority information about the critical driving situation. In this case, the second image 52 shows an exterior image 8 of the vehicle 1, in particular what would be seen in a rear-view mirror. This represents a driving aid since, when the driver changes lanes, he usually looks at the rear-view mirrors. [0210] Note that the second image 52 of the exterior zone 12 of the vehicle 1 shown in the virtual reality device 3 occupies a determined position depending on the priority of the critical driving situation. Thus, the second image 52 is displayed within the virtual environment 6 in the zone where the collision risk occurs. Additionally, the position of the second image 52 in the virtual environment 6 is based on the field of vision 22 of the user 2, so as not to interfere with another important or preferred element.
[0211] [0212] In addition to the position, the second image 52 may comprise an alteration of brightness, colour, contrast and/or saturation, so that said visual parameters of the second image 52 can be modified in order to bring greater relevance or clarity to the content of the information shown. Additionally, displaying the second image 52 comprises defining a dimension according to the priority of the critical driving situation determined. [0213] [0214] In a first embodiment of the method of the invention, the position of the second image 52 varies as a function of the field of vision 22 of the user 2. Thus, said second image 52 does not occupy a fixed position in the environment, as would a screen of the vehicle 1 or a rear-view mirror of the vehicle 1; instead, the second image 52 is displaced according to a variation of the determined field of vision 22 of the user 2. For example, if the user 2 is looking backwards or to the side, because he is talking to another passenger or is simply distracted, he can be shown an image from a second camera 14 that faces forward, if it is considered that there is a risk of collision with the vehicle 1 in front. On the other hand, if in said driving situation the user 2 is already facing forward, it will not be necessary to show him this information from the forward-facing second camera 14. [0215] [0216] In Figure 6B it is possible to observe, illustratively, a perspective view of the virtual environment 6 observed by the driver in the interior of a vehicle 1 through the virtual reality device 3, after a first turn of the head 21 to the right. The second image 52 shown in the virtual environment 6 has shifted to the right according to the variation of the field of vision 22, with respect to the first position it occupied in Figure 6A. Said Figure 6B includes the same elements mentioned for Figure 6A.
[0217] In Figure 6C one can observe, illustratively, the same view as in Figures 6A and 6B, where the user 2 has made a second turn of the head 21 to the right, greater than the first turn. The second image 52 shown in the virtual environment 6 has shifted to the right according to the variation of the field of vision 22, with respect to the second position it occupied in Figure 6B. [0218] [0219] In Figure 6C, the superimposition of the second image 52 on the first image 51 is additionally performed based on the classification of the at least one first segment 511 of the first image. Thus, it is determined that the second image 52, displaced according to the variation of the field of vision 22, would be partially superimposed on elements of the interior 11 of the passenger compartment. In the generation of the virtual environment 6, a higher priority is determined for the second segments 521 of the second image 52 than for the first segments 511 classified as interior 11 of the vehicle 1. Consequently, the second segments 521 are superimposed on the first segments 511 classified as interior 11 of the vehicle 1. Still, said second image 52 can be represented with transparency, so as to avoid completely hiding this previously defined content. [0220] [0221] In Figure 6D one can observe, illustratively, the same view as in Figures 6A, 6B and 6C, where the user 2 has made a third turn of the head 21 to the right, greater than the second turn. The second image 52 shown in the virtual environment 6 has moved to the right according to the variation of the field of vision 22, with respect to the position it occupied in Figure 6C. [0222] [0223] Additionally, in the generation of the virtual environment 6, a higher priority is determined for the first segments 511 classified as exterior 12 of the vehicle than for the second segments 521 of the second image 52.
Additionally, a higher priority is determined for the second segments 521 of the second image 52 than for the first segments 511 classified as interior 11 of the vehicle 1. Therefore, the second image 52 represented in the virtual environment 6 has shifted to the right according to the variation of the field of vision 22, but it has also moved towards the lower area, so that no window area 15 is covered by the second image 52. This avoids blocking the user's view of the exterior 12 of the vehicle 1, which could be critical for driving the vehicle. In addition, the second image 52 is shown with transparency, to avoid completely hiding the first segments 511 classified as interior 11 of the vehicle 1. [0224] According to this first embodiment, the second image 52 will be displaced in the virtual environment 6, so that the second segments 521 of the second image 52 will overlap successive first segments 511 of the interior 11 of the vehicle 1, while avoiding overlapping second segments 521 of the second image 52 on first segments 511 of the exterior 12 of the vehicle 1. [0225] [0226] Likewise, Figures 7A, 7B, 7C and 7D show a second embodiment in terms of the strategy for generating the virtual environment 6. According to said second embodiment, the second image 52 occupies a fixed area in the virtual environment 6, so that a variation of the determined field of vision 22 of the user 2 does not affect the position of the second image 52 in the virtual environment 6. Thus, when the user 2 looks at that particular area, he can observe the second image 52. Otherwise, if the field of vision 22 of the user 2 does not encompass the second image 52, the user 2 will not see it. [0227] [0228] Figure 7A is equivalent to Figure 6A, where a critical driving situation has been determined because the vehicle 1 has invaded a left lane in which a third vehicle is circulating.
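The first embodiment's placement rule, which is to follow the gaze but never cover the window area, can be sketched in screen coordinates. This is an illustrative geometry only: it assumes a single rectangular window region, y growing downward, and that "move towards the lower area" means dropping the overlay just below the glazing; none of these specifics are fixed by the patent.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float  # left edge
    y: float  # top edge (y grows downward)
    w: float
    h: float

    def intersects(self, other: "Rect") -> bool:
        return (self.x < other.x + other.w and other.x < self.x + self.w and
                self.y < other.y + other.h and other.y < self.y + self.h)

def place_overlay(gaze_x, gaze_y, overlay_w, overlay_h, window: Rect) -> Rect:
    """First embodiment: centre the second image on the gaze direction,
    then push it below the window area so the exterior is never covered."""
    ov = Rect(gaze_x - overlay_w / 2, gaze_y - overlay_h / 2,
              overlay_w, overlay_h)
    if ov.intersects(window):
        ov.y = window.y + window.h  # drop just below the glazing
    return ov
```

Rendering the overlay with partial transparency on interior pixels, as the text describes, would be applied after this placement step.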
Therefore, the second image 52 is represented in the virtual environment 6 so that the user 2 receives priority information about the critical driving situation. In this case, the second image 52 shows an exterior image 8 of the vehicle 1, in particular what would be seen in a rear-view mirror. [0229] [0230] In Figure 7B one can observe, illustratively, the same view as in Figure 7A, where the user 2 has made a first turn of the head 21 to the right. As can be seen, the field of vision 22 of the user 2 has varied. In contrast, the relative position of the second image 52 with respect to elements of the vehicle, such as components of the interior 11 of the vehicle, remains constant. [0231] [0232] In Figure 7C it is possible to observe, illustratively, the same view as in Figures 7A and 7B, after a second turn of the head 21 to the right, greater than the first turn. As can be seen, the field of vision 22 of the user 2 has varied. In contrast, the position of the second image 52 remains fixed in the virtual environment 6, partially leaving the field of vision of the user, according to the present invention. [0233] In Figure 7D it is possible to observe, illustratively, the same view as in Figures 7A, 7B and 7C, after a third turn of the head 21 to the right, greater than the second turn. Again, the field of vision 22 of the user 2 has varied. In contrast, the second image 52 remains fixed in a position approximating that which a rear-view mirror of the vehicle 1 would have. Consequently, the second image 52 has completely left the field of vision 22 of the user 2. It is observed that when the area of space where the second image 52 is located leaves the field of vision 22, the second image 52 stops being projected in the virtual environment 6. [0234] [0235] Likewise, Figures 8A, 8B, 8C and 8D show a third embodiment in terms of the strategy for generating the virtual environment 6.
Said third embodiment is based on a combination of the first and second embodiments. According to said third embodiment, the second image 52 occupies a fixed area in the virtual environment 6, so that a variation of the determined field of vision 22 of the user 2 does not affect the position of the second image 52 in the virtual environment 6. Said fixed area is respected while the second image 52 remains within the field of vision 22 of the user 2. Thus, if the field of vision 22 of the user 2 varies and the second image disappears, at least partially, from the field of vision 22 of the user 2, the second image 52 is shifted according to the variation of the field of vision 22 of the user 2. [0236] [0237] Figure 8A is equivalent to Figures 6A and 7A, where a critical driving situation has been determined because the vehicle 1 has invaded a left lane in which a third vehicle is circulating. Therefore, the second image 52 is represented in the virtual environment 6 so that the user 2 receives priority information about the critical driving situation. In this case, the second image 52 shows an exterior image 8 of the vehicle 1, in particular what would be seen in a rear-view mirror. [0238] [0239] Figure 8B shows, illustratively, the same view as in Figure 8A, where the user 2 has made a first turn of the head 21 to the right. As can be seen, the field of vision 22 of the user 2 has varied. In contrast, the relative position of the second image 52 with respect to elements of the vehicle, such as components of the interior 11 of the vehicle, remains constant. [0240] [0241] Figure 8C shows, illustratively, the same view as in Figures 8A and 8B, where the user 2 has made a second turn of the head 21 to the right, greater than the first turn. The second image 52 shown in the virtual environment 6 has moved to the right according to the variation of the field of vision 22, with respect to the position it occupied in Figure 8B.
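The third embodiment reduces to a clamp: the second image keeps its fixed anchor while fully inside the field of vision, and is dragged along the nearest edge once the view turns away. A one-dimensional sketch (horizontal axis only, with a hypothetical anchor and field-of-vision interval) captures the behaviour:

```python
def hybrid_overlay_x(anchor_x: float, overlay_w: float,
                     fov_left: float, fov_right: float) -> float:
    """Third embodiment: the overlay stays at its fixed anchor while fully
    inside the field of vision; if the view moves past it, the overlay is
    shifted so it remains clamped to the nearest edge of the view."""
    x = anchor_x
    if x < fov_left:                 # view has turned right past the anchor
        x = fov_left
    elif x + overlay_w > fov_right:  # view has turned left past the anchor
        x = fov_right - overlay_w
    return x
```

Contrast with the second embodiment, where no clamping occurs and the image is simply no longer projected once its fixed area leaves the field of vision.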
[0242] [0243] In Figure 8C, the superimposition of the second image 52 on the first image 51 is additionally performed based on the classification of the at least one first segment 511 of the first image. Thus, it is determined that the second image 52, displaced according to the variation of the field of vision 22, would be partially superimposed on elements of the interior 11 of the passenger compartment. In the generation of the virtual environment 6, a higher priority is determined for the second segments 521 of the second image 52 than for the first segments 511 classified as interior 11 of the vehicle 1. Consequently, the second segments 521 are superimposed on the first segments 511 classified as interior 11 of the vehicle 1. Still, said second image 52 can be represented with transparency, so as to avoid completely hiding this previously defined content. [0244] [0245] Figure 8D shows, illustratively, the same view as in Figures 8A, 8B and 8C, where the user 2 has made a third turn of the head 21 to the right, greater than the second turn. The second image 52 shown in the virtual environment 6 has moved to the right according to the variation of the field of vision 22, with respect to the position it occupied in Figure 8C. In the same way as for Figure 8C, when facing an overlap of the second image 52 with relevant elements of the vehicle 1 present in the field of vision 22 of the user 2, transparencies and displacements of the second image 52 can be applied, as explained for Figures 6C and 6D. [0246] [0247] In Figure 9A it is possible to observe, illustratively, a perspective view of the virtual environment 6 observed by the driver in the passenger compartment of a vehicle 1 through the virtual reality device 3, with a second image 52 superimposed on at least one zone of the first image 51 and with objects 13 additional to the vehicle 1 superimposed on the second image 52, according to the present invention.
[0248] [0249] Preferably, and given the layered division of the field of vision 22, the projection of the images 51, 52 will preferably be carried out on pixels of the interior 11, avoiding superimposing the second image 52 on pixels of the exterior 12 of the vehicle 1, since in that case the visibility of the user 2 would be reduced. Only in cases of high emergency could it be decided to project the second image 52 superimposed on pixels of the exterior 12 of the vehicle 1. [0250] [0251] In the generation of the virtual environment 6 shown in Figure 9A, a higher priority is determined for the second segments 521 of the second image 52 than for the first segments 511 classified as interior 11 of the vehicle 1. Consequently, the second segments 521 are superimposed on the first segments 511 classified as interior 11 of the vehicle 1. Still, said second image 52 can be represented with transparency, to avoid completely hiding this previously defined content. Additionally, a higher priority is defined for the first segments 511 classified as exterior 12 of the vehicle 1 than for the second segments 521 of the second image 52, so as to avoid superimposing the second segments 521 on the first segments 511 classified as exterior 12 of the vehicle 1. Additionally, a higher priority is determined for the first segments 511 classified as additional object 13 of the vehicle 1 than for the second segments 521 of the second image 52, so that the first segments 511 classified as additional object 13 of the vehicle 1 are superimposed on the second segments 521. For example, the need to superimpose in this case the hand and arm of the user 2, classified as additional object 13 of the vehicle, may be based on determining a movement or change of position of the additional object 13 of the vehicle 1, which establishes the importance of representing said additional object 13 of the vehicle 1 in the virtual environment.
[0252] [0253] In this case it is necessary to move the position of the projected second image 52, or to apply transparencies to said second image 52, so that the user 2 can interact with the vehicle 1 correctly, that is, seeing what is being pressed. In this way, driving is safer. [0254] [0255] In particular, by way of example of a risk table: with a risk determined as high, the second image 52 is superimposed on the interior layer 11, the exterior layer 12 and the additional objects 13. With a risk determined as medium, the second image 52 is superimposed on the interior layer 11 and the exterior layer 12. And with a risk determined as low, the second image 52 is superimposed only on the interior layer 11. [0256] Figure 9B shows the same view as figure 9A. In this case, the superposition of the second image 52 on the additional objects 13 of the vehicle 1 is shown, altering at least one visual property of the second image 52, such as its transparency, so that the at least one additional object 13 of the vehicle 1 remains partially visible. [0257] [0258] In a preferred embodiment, the projection of the images 51, 52 may be combined with other augmented reality elements or additional information. For example, signaling in another color the third vehicle with which there is a risk of collision, signaling with greater color intensity the lane lines that are being invaded, or signaling the meters remaining to the preceding vehicle. [0259] [0260] The details, shapes, dimensions and other accessory elements, as well as the components used in the implementation of the method and system for displaying priority information in a vehicle, may be conveniently replaced by others that are technically equivalent without departing from the essence of the invention or from the scope defined by the claims, which are included after the following list.
[0261] [0262] List of references: [0263] [0264]
1 vehicle
11 interior
12 exterior
13 additional object
14 second camera
15 window area
2 user
21 head
22 field of vision
23 real vector
24 theoretical vector
3 virtual reality device
31 first camera
sensor
accelerometer
gyroscope
36 third camera
37 screen
memory unit
4 processing unit
51 first image
511 first segment
52 second image
521 second segment
6 virtual environment
virtual segment
7 position detection means
distance sensor
Claims:
1- Method for displaying priority information in a vehicle (1), where a virtual reality device (3) is used by a user (2) in the interior (11) of the vehicle (1), where the method comprises the steps of: i) capturing at least one first image (51) of an environment of the user (2), where the at least one first image (51) captured coincides with a field of vision (22) of the user (2), ii) obtaining information about an environment of the vehicle (1), iii) determining a critical driving situation based on the information of the environment of the vehicle (1) obtained, iv) capturing at least one second image (52) of the environment of the vehicle (1), v) generating a virtual environment (6) of the user (2), where the virtual environment (6) comprises the at least one first image (51) captured and the at least one second image (52) captured, where the virtual environment (6) generated is based on the critical driving situation determined, and vi) displaying the virtual environment (6) generated by means of the virtual reality device (3). 2- Method according to claim 1, characterized in that the step of generating a virtual environment (6) comprises superimposing the at least one second image (52) on the at least one first image (51). 3- Method according to any of the preceding claims, characterized in that the step of generating a virtual environment (6) comprises altering a brightness, a color, a luminosity, a contrast and/or a saturation of the at least one second image (52). 4- Method according to any of the preceding claims, characterized in that the step of generating a virtual environment (6) comprises altering a brightness, a color, a luminosity, a contrast and/or a saturation of the at least one first image (51). 5- Method according to any of the preceding claims, characterized in that the step of generating a virtual environment (6) comprises defining a dimension of the at least one second image (52).
6- Method according to any of the preceding claims, characterized in that the step of generating a virtual environment (6) comprises incorporating at least one item of additional virtual information, where the additional virtual information is incorporated in the at least one first image (51) and/or in the at least one second image (52). 7- Method according to any of the preceding claims, characterized in that it comprises a step of determining the field of vision (22) of the user (2), where the step of generating a virtual environment (6) comprises establishing a position of the at least one second image (52) in the virtual environment (6) based on the field of vision (22) of the user (2) determined. 8- Method according to claim 7, characterized in that establishing the position of the at least one second image (52) comprises moving the position of the second image (52) in the virtual environment (6) according to a variation of the field of vision (22) of the user (2) determined. 9- Method according to any of claims 7 or 8, characterized in that establishing the position of the at least one second image (52) comprises keeping the position of the at least one second image (52) fixed in the virtual environment (6). 10- Method according to any of the preceding claims, characterized in that the at least one first image (51) comprises at least one first segment (511), wherein the method comprises a step of classifying the at least one first segment (511) of the at least one first image (51) as: - interior (11) of the vehicle (1), - exterior (12) of the vehicle (1), and - additional object (13) of the vehicle (1). 11- Method according to claim 10, characterized in that the step of generating a virtual environment (6) of the user (2) is additionally based on the classification of the at least one first segment (511) of the at least one first image (51).
12- Method according to claim 11, characterized in that the at least one second image (52) comprises at least one second segment (521), wherein the step of superimposing the at least one second image (52) on the at least one first image (51) comprises superimposing the at least one second segment (521) of the second image (52) on the first segment (511) of the interior (11) of the vehicle (1) and/or on the first segment (511) of the exterior (12) of the vehicle (1) and/or on the first segment (511) of additional object (13) of the vehicle (1). 13- Method according to claim 11, characterized in that the step of moving the position of the second image (52) in the virtual environment (6) comprises superimposing the at least one second segment (521) of the second image (52) on successive first segments (511) of the interior (11) of the vehicle (1) and/or on successive first segments (511) of the exterior (12) of the vehicle (1) and/or on successive first segments (511) of additional object (13) of the vehicle (1).
14- System for displaying priority information in a vehicle (1), where the system comprises: - a virtual reality device (3), wherein the virtual reality device (3) comprises at least one screen (37), - at least one first camera (31), wherein the at least one first camera (31) is configured to capture at least one first image (51), wherein the at least one first image (51) coincides with a field of vision (22) of the user (2), - at least one sensing means, wherein the at least one sensing means is configured to obtain information about an environment of the vehicle (1), - at least one second camera (14), wherein the at least one second camera (14) is configured to capture at least one second image (52) of an environment of the vehicle (1), and - at least one processing unit (4), where the at least one processing unit (4) is configured to determine a critical driving situation based on the information of the environment of the vehicle (1), and to generate a virtual environment (6) of the user (2), where the virtual environment (6) comprises the at least one first image (51) captured and the at least one second image (52) captured, where the virtual environment (6) generated is based on the critical driving situation determined. 15- System according to claim 14, characterized in that it comprises at least one position detection means (7), where the position detection means (7) is configured to determine a position of the virtual reality device (3) in the vehicle (1).
Similar technologies:
Publication number | Publication date | Patent title
US8536995B2 | 2013-09-17 | Information display apparatus and information display method
ES2533594T3 | 2015-04-13 | Rear view imaging systems for vehicles
US9086566B2 | 2015-07-21 | Monocular head mounted display
ES2660994T3 | 2018-03-27 | Display device for vehicles, in particular light commercial vehicles
US20180268701A1 | 2018-09-20 | Vehicle display system and method of controlling vehicle display system
US9802540B2 | 2017-10-31 | Process for representing vehicle surroundings information of a motor vehicle
WO2014130049A1 | 2014-08-28 | Systems and methods for augmented rear-view displays
JP2009227018A | 2009-10-08 | Anti-dazzle device for vehicle
US20190100145A1 | 2019-04-04 | Three-dimensional image driving assistance device
JP6731116B2 | 2020-07-29 | Head-up display device and display control method thereof
US11181743B2 | 2021-11-23 | Head up display apparatus and display control method thereof
WO2019146162A1 | 2019-08-01 | Display control device and display system
ES2704327B2 | 2020-02-21 | Method and system to display virtual reality information in a vehicle
JP2008037118A | 2008-02-21 | Display for vehicle
ES2704373A1 | 2019-03-15 | Method and system to show virtual reality information in a vehicle
ES2704350B2 | 2020-03-17 | Method and system to display priority information in a vehicle
KR20200005865A | 2020-01-17 | Wide area surround view monitoring apparatus for vehicle and control method thereof
JP2014036326A | 2014-02-24 | Bird's eye image display device
JPWO2018030320A1 | 2019-06-13 | Vehicle display device
JP2019099030A | 2019-06-24 | Display device for vehicle
JP2009214831A | 2009-09-24 | Display for vehicle
US20200148112A1 | 2020-05-14 | Driver-assistance device, driver-assistance system, method of assisting driver, and computer readable recording medium
WO2017024458A1 | 2017-02-16 | System, method and apparatus for vehicle and computer readable medium
JP2019077302A | 2019-05-23 | Display control device, display system, and display control method
KR20120066927A | 2012-06-25 | Apparatus and method for displaying blind spot by car
Patent family:
Publication number | Publication date
ES2704350B2 | 2020-03-17
EP3457363A1 | 2019-03-20
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US20140336876A1 | 2013-05-10 | 2014-11-13 | Magna Electronics Inc. | Vehicle vision system
US20170113702A1 | 2015-10-26 | 2017-04-27 | Active Knowledge Ltd. | Warning a vehicle occupant before an intense movement
WO2017095790A1 | 2015-12-02 | 2017-06-08 | Osterhout Group, Inc. | Improved safety for a vehicle operator with an HMD
EP1990674A1 | 2007-05-09 | 2008-11-12 | Harman Becker Automotive Systems GmbH | Head-mounted display system
US20160187651A1 | 2014-03-28 | 2016-06-30 | Osterhout Group, Inc. | Safety for a vehicle operator with an HMD
US10040394B2 | 2015-06-17 | 2018-08-07 | Geo Semiconductor Inc. | Vehicle vision system
US10373378B2 | 2015-06-26 | 2019-08-06 | Paccar Inc | Augmented reality system for vehicle blind spot prevention
US20170161949A1 | 2015-12-08 | 2017-06-08 | GM Global Technology Operations LLC | Holographic waveguide HUD side view display
Legal status:
2019-03-15 | BA2A | Patent application published | Ref document number: 2704350, Country: ES, Kind code: A1, Effective date: 2019-03-15
2020-03-17 | FG2A | Definitive protection | Ref document number: 2704350, Country: ES, Kind code: B2, Effective date: 2020-03-17
Priority:
Application number | Filing date | Patent title
ES201731120A | 2017-09-15 | Method and system to display priority information in a vehicle
EP18194941.3A | 2018-09-17 | Method and system for displaying priority information in a vehicle